CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Observer bias and other “experimenter effects” occur when researchers’ expectations influence study outcome. These biases are strongest when researchers expect a particular result, are measuring subjective variables, and have an incentive to produce data that confirm predictions. To minimize bias, it is good practice to work “blind,” meaning that experimenters are unaware of the identity or treatment group of their subjects while conducting research. Here, using text mining and a literature review, we find evidence that blind protocols are uncommon in the life sciences and that nonblind studies tend to report higher effect sizes and more significant p-values. We discuss methods to minimize bias and urge researchers, editors, and peer reviewers to keep blind protocols in mind.
Usage Notes
Evolution literature review data
Exact p value dataset
journal_categories
p values data 24 Sept
Proportion of significant p values per paper
R script to filter and classify the p value data
Quiz answers - guessing effect size from abstracts: The answers provided by the 9 evolutionary biologists to the quiz we designed, which aimed to test whether trained specialists are able to infer the relative size and direction of an effect from a paper's title and abstract.
readme: Description of the contents of all the other files in this Dryad submission.
R script to statistically analyse the p value data: R script detailing the statistical analyses we performed on the p value datasets.
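For readers who want a feel for how reported p-values can be pulled out of article text, here is a minimal R sketch of regex-based extraction. It is an illustration only; the example abstracts and the exact pattern are assumptions, not the filtering script listed above.

```r
# Minimal sketch (not the authors' script): extract reported p-values from free
# text with a regular expression, then flag which fall below 0.05.
abstracts <- c("We found a strong effect (p = 0.003) but no interaction (p = 0.47).",
               "Blind scoring reduced the group difference (p < 0.06).")

extract_p <- function(txt) {
  hits <- regmatches(txt, gregexpr("p\\s*[<=]\\s*0?\\.\\d+", txt, ignore.case = TRUE))
  as.numeric(sub(".*[<=]\\s*", "", unlist(hits)))
}

p_vals <- extract_p(abstracts)
p_vals                 # 0.003 0.470 0.060
mean(p_vals < 0.05)    # proportion of extracted p-values below 0.05
```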
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
A focus on novel, confirmatory, and statistically significant results leads to substantial bias in the scientific literature. One type of bias, known as "p-hacking," occurs when researchers collect or select data or statistical analyses until nonsignificant results become significant. Here, we use text-mining to demonstrate that p-hacking is widespread throughout science. We then illustrate how one can test for p-hacking when performing a meta-analysis and show that, while p-hacking is probably common, its effect seems to be weak relative to the real effect sizes being measured. This result suggests that p-hacking probably does not drastically alter scientific consensuses drawn from meta-analyses.
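The kind of check described above can be illustrated with a small R sketch: under p-hacking one expects a surplus of p-values just under 0.05 relative to the adjacent bin. The bin boundaries and the toy p-values below are assumptions for illustration, not the analysis behind this dataset.

```r
# Illustrative sketch: if p-values are being nudged just under 0.05, the bin
# (0.045, 0.05] should hold more values than the adjacent bin (0.04, 0.045].
p_vals <- c(0.048, 0.049, 0.041, 0.046, 0.012, 0.032, 0.047, 0.044, 0.049, 0.038)

upper <- sum(p_vals > 0.045 & p_vals <= 0.050)  # just below the threshold
lower <- sum(p_vals > 0.040 & p_vals <= 0.045)  # adjacent bin

# Under no p-hacking, a value in this narrow window is not expected to land in
# the upper bin more often than the lower one; test for an excess one-sided.
binom.test(upper, upper + lower, p = 0.5, alternative = "greater")
```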
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Categorization of doctoral theses.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Noun tags.
Data and replication code for the paper "Exploring Gender Bias in Homicide Sentencing: An Empirical Study of Russian Court Decisions Using Text Mining"
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Age of the author and its impact on non-inclusiveness.
This data collection contains transcripts of interviews carried out with experienced human rights investigators. Throughout these semi-structured interviews, participants were invited to share their views and experiences on: the extent to which OSINT has affected investigative practices; the representativeness of open source research sources to affected populations; the tools that assist in data gathering and verification; and the challenges and opportunities presented by this type of evidence.
Technology is rapidly transforming how investigations of human rights abuses are carried out. Traditionally, investigations relied upon witness testimony and on-site evidence to prove the existence of human rights violations. More recently, however, human rights investigations have been turning to Open Source Intelligence (OSINT), such as social media content and satellite imagery, to overcome the physical, security, and societal barriers to gathering reliable evidence. In August 2017, the International Criminal Court issued its first arrest warrant based on social media evidence. OSINT has the potential to democratise the flow of information on international human rights violations in an unprecedented way. By allowing investigations to be carried out remotely, and by enabling information to be received directly from witnesses and victims rather than through intermediaries, OSINT can break down some of the barriers that have silenced some voices in traditional investigations and prioritised others.

However, new issues arise with these types of investigations. The huge volume of evidence retrievable from social media can make it difficult for investigators to extract truly useful information. There are further issues of informational bias that can be attributed to algorithmic bias or to misinformation posted online, intended to obfuscate or exaggerate human rights abuses. By combining a unique multidisciplinary methodology, drawing on socio-legal, computer science, and geospatial analysis methods, this project asks: "To what extent can OSINT be leveraged to contribute more systematically to human rights investigation and documentation? Can natural language processing and geospatial methods for analysing social media content assist in the discovery and analysis processes, and help overcome potential issues of informational bias and misinformation that may arise?"

It will:
1) Create the first ever overview of the use of OSINT by UN human rights fact-finding missions. Through interviews with members of UN Commissions of Inquiry and human rights investigations (many of whom we have worked with on other projects) and a project workshop, we will identify the barriers and reservations to their use of OSINT. Combining this data with a systematic review of reports produced by these investigations, we will determine the extent to which information gathered through OSINT methods could address some of the informational gaps inherent to traditional investigative methods.
2) Develop, in collaboration with human rights organisations, the Knowledge Hub Framework (KHF), a set of core microservices that will provide tools to gather data and carry out specific analytical tasks, such as comparing documents for similarity, identifying place names within free text and mapping them, and assigning weightings and confidence ratings to data sources based on automated crosschecks, validations, and historical accuracies.
3) Through the KHF, use natural language processing, text mining, and spatial analysis techniques, combined with legal analysis, in a case study to demonstrate how OSINT-based investigations could be made more systematic. Our case study will focus on The Philippines, where mass human rights violations have allegedly occurred, but which is not currently subject to a UN human rights inquiry, and which has witnessed a proliferation of social media accounts spreading counter-narratives about alleged human rights abuses.

In a dedicated workshop, we will demonstrate the prototype KHF to stakeholders.
We will later offer training sessions for human rights organisations. The Institute for International Criminal Investigations has agreed to host one such training session in The Hague. As well as the KHF, which will be updated as new functionalities are created, the project will result in three academic journal articles and a Guide to OSINT for Human Rights Organisations. It has the potential to transform human rights fact-finding.
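One of the KHF analytical tasks listed above, comparing documents for similarity, can be sketched in a few lines of base R using term-frequency vectors and cosine similarity. The document texts and object names below are hypothetical placeholders, not project code.

```r
# Toy sketch of the "compare documents for similarity" task: plain
# term-frequency vectors and cosine similarity, base R only.
docs <- c(report_a = "airstrike reported near the market on friday morning",
          report_b = "witnesses reported an airstrike near the central market",
          report_c = "flooding displaced hundreds of families last week")

tokenize <- function(x) unlist(strsplit(tolower(x), "[^a-z]+"))
vocab <- unique(unlist(lapply(docs, tokenize)))
tf <- sapply(docs, function(d) table(factor(tokenize(d), levels = vocab)))

cosine <- function(a, b) sum(a * b) / (sqrt(sum(a^2)) * sqrt(sum(b^2)))
cosine(tf[, "report_a"], tf[, "report_b"])  # high: likely the same incident
cosine(tf[, "report_a"], tf[, "report_c"])  # low: unrelated report
```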
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Provides data about every New Zealand athlete who has competed in the Olympic Games and the Commonwealth Games. The data was collected from the New Zealand Olympic Committee website and combined with media mentions found within an archive of New Zealand media articles maintained by data science consultancy DOT loves data.
This dataset supports the conference paper, "Gender bias and the New Zealand media's reporting of elite athletes" (MathSport 2018).
Includes the following variables:
Note: the Sport variable is a variable-length list delimited by three spaces ("   ")
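A minimal R sketch of how the three-space-delimited Sport field could be unpacked; the Name column and the example values are hypothetical, not rows from the dataset.

```r
# Sketch of splitting the Sport field, which packs multiple sports into one
# string separated by three spaces (example data is made up).
athletes <- data.frame(Name  = c("Athlete A", "Athlete B"),
                       Sport = c("Athletics   Cycling", "Rowing"),
                       stringsAsFactors = FALSE)

sports_per_athlete <- strsplit(athletes$Sport, "   ", fixed = TRUE)
sports_per_athlete[[1]]          # "Athletics" "Cycling"
lengths(sports_per_athlete)      # number of sports per athlete: 2 1
```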
Bias Tire Market Size 2024-2028
The bias tire market size is forecast to increase by USD 6.33 billion at a CAGR of 4.4% between 2023 and 2028.
The market is experiencing significant growth due to the increasing demand for bias tires in sectors such as agriculture equipment. This sector's expansion is driven by the durability and reliability of bias tires and automotive tires in heavy-duty applications. Another trend influencing the market is the rise of online retailing in the tire industry, which offers convenience and cost savings to consumers. However, environmental concerns related to bias tire manufacturing activities, such as the release of harmful emissions and excessive use of natural resources, pose challenges to market growth.
Producers must address these issues through sustainable manufacturing practices and innovation to ensure long-term market success. The bias tire market is driven by the demand for heavy-duty tires known for their durability and cost-effectiveness. These tires excel in off-road applications, making them essential for agriculture and industrial use. With the growth of the automotive industry, tire performance and maintenance have become critical, further encouraging practices like tire retreading. Emphasizing sustainability, bias tires provide reliable solutions for varied terrains while maintaining operational efficiency.
What is the Size of the Bias Tire Market During the Forecast Period?
Bias tires, a type of pneumatic tire with a distinctive curved cross-section, continue to hold significant market share in various industries due to their unique features. The construction of bias tires involves using rubber plies, which can be made of nylon, steel, or fiberglass, that are laid in a crisscross pattern. This design enhances the tire's load-carrying capacity, making it an ideal choice for tractor applications in the agricultural sector. Automation in manufacturing processes has led to advancements in bias tire technology, enabling the production of high-performance tires for heavy machinery, trailers, and construction equipment. The global mining activity and infrastructure development projects require strong tires with superior load-carrying capacity, leading to an increased demand for bias tires.
Moreover, the market for bias tires is segmented into applications such as agricultural equipment, passenger cars, commercial vehicles, and OEMs. Retailers play a crucial role in the distribution of bias tires to end-users. The sidewall and tread designs of bias tires are engineered using fabric cords and inner plies to optimize their performance in specific applications. Despite the increasing popularity of radial tires, general bias tires continue to cater to the unique requirements of various industries.
Bias Tire Market Segmentation
The bias tire market research report provides comprehensive data (region-wise segment analysis), with forecasts and estimates in 'USD billion' for the period 2024-2028, as well as historical data from 2018-2022 for the following segments.
Type: Bias belted tires, General bias tires
Distribution Channel: OEM, Aftermarket
Geography: APAC (China, India, Japan, South Korea, Australia), North America (US, Canada, Mexico), Europe (Germany, France, UK), South America, Middle East and Africa
By Type Insights
The bias belted tires segment is estimated to witness significant growth during the forecast period. Bias tires, an essential component of various types of vehicles, continue to hold significant market share due to their strength and versatility. These tires, featuring rubber plies reinforced with materials like nylon, steel, and fiberglass, cater to the demands of the agricultural sector, heavy machinery, trailers, and industrial vehicles. The bias-belted and general bias tire designs offer superior load-carrying capacity for tractors, construction & mining vehicles, and pickup trucks. In the automation era, radial tires have gained popularity, but bias tires maintain their relevance. They are extensively used in farm mechanization, global mining activity, and cross-border freight transit. The layered design of bias tires, comprising fabric cords in the sidewall, tread, and inner plies, ensures durability and resistance to punctures.
The tire ply, a crucial tire component, influences rolling resistance, which is a critical factor in determining fuel efficiency. OEMs and aftermarket retailers continue to invest in research and development to enhance the performance and functionality of bias tires.
The Bias belted tires segment accounted for USD 13.69 billion in 2018 and is expected to increase gradually over the forecast period.
Regional Insights
North America
https://www.archivemarketresearch.com/privacy-policy
The Artificial Intelligence (AI) market is experiencing explosive growth, driven by advancements in machine learning, data mining, and automatic driving technologies. While precise market size figures for 2025 aren't provided, considering the rapid expansion of AI across various sectors, a reasonable estimate for the total market size in 2025 is $500 billion, based on reports indicating substantial growth in recent years and projections for future expansion. Assuming a conservative Compound Annual Growth Rate (CAGR) of 25% for the forecast period (2025-2033), the market is projected to reach approximately $3.7 trillion by 2033.

This significant expansion is fueled by several key factors. Firstly, the increasing availability and affordability of computing power allow for more complex AI models and applications. Secondly, the burgeoning volume of data generated across various industries provides rich fuel for AI algorithms. Thirdly, businesses across sectors, including healthcare, automotive, and manufacturing, are increasingly adopting AI to improve efficiency, optimize processes, and gain a competitive edge. The segments of Automatic Driving, Machine Learning and Data Mining are expected to be the key drivers of this growth, with applications in healthcare and automotive leading the charge.

However, challenges remain. The high cost of AI development and implementation can pose a barrier to entry for smaller businesses. Concerns surrounding data privacy, algorithmic bias, and job displacement due to automation also represent potential restraints on market growth. Nevertheless, the overall trajectory indicates a sustained period of expansion, shaped by continuous innovation and widening adoption across diverse industries and geographical regions. Companies such as Uber, Airbnb, Salesforce, and others are at the forefront of this technological revolution, leveraging AI to enhance their services and operations. The regional breakdown shows a significant market presence across North America, Europe, and Asia Pacific, with further expansion anticipated in emerging markets. The market's growth is expected to remain robust as AI continues to permeate various facets of our lives, transforming industries and creating new opportunities.
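As a quick check on the headline arithmetic, the projection can be reproduced with a one-line compound-growth calculation in R; the number of compounding periods used below is an assumption about how the report counts years within the 2025-2033 window.

```r
# Compound growth from the report's stated base at the stated CAGR
# (the nine compounding periods are an assumed convention).
base_2025 <- 500   # USD billion, the report's 2025 estimate
cagr      <- 0.25  # stated compound annual growth rate
periods   <- 9     # assumption: nine compounding years to the 2033 figure
base_2025 * (1 + cagr)^periods   # about 3725 USD billion, i.e. roughly USD 3.7 trillion
```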
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Key indicators.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
As there was no large publicly available cross-domain dataset for comparative argument mining, we created one composed of sentences, potentially annotated with BETTER / WORSE markers (the first object is better / worse than the second object) or NONE (the sentence does not contain a comparison of the target objects). BETTER sentences stand for a pro-argument in favor of the first compared object; WORSE sentences represent a con-argument and favor the second object. We aimed to minimize domain-specific biases in the dataset in order to capture the nature of comparison rather than the nature of the particular domains, and thus decided to control the specificity of the domains through the selection of comparison targets. We hypothesized, and could confirm in preliminary experiments, that comparison targets usually have a common hypernym (i.e., are instances of the same class), which we used to select the pairs of compared objects.

The most specific domain we chose is computer science, with comparison targets such as programming languages, database products, and technology standards such as Bluetooth or Ethernet. Many computer science concepts can be compared objectively (e.g., on transmission speed or suitability for certain applications). The objects for this domain were manually extracted from "List of"-articles on Wikipedia. In the annotation process, annotators were asked to label sentences from this domain only if they had some basic knowledge of computer science. The second, broader domain is brands. It contains objects of different types (e.g., cars, electronics, and food). As brands are present in everyday life, anyone should be able to label the majority of sentences containing well-known brands such as Coca-Cola or Mercedes. Again, targets for this domain were manually extracted from "List of"-articles on Wikipedia. The third domain, random, is not restricted to any topic. For each of 24 randomly selected seed words, 10 similar words were collected based on the distributional similarity API of JoBimText (http://www.jobimtext.org). The seed words were created using randomlists.com: book, car, carpenter, cellphone, Christmas, coffee, cork, Florida, hamster, hiking, Hoover, Metallica, NBC, Netflix, ninja, pencil, salad, soccer, Starbucks, sword, Tolkien, wine, wood, XBox, Yale.

Especially for brands and computer science, the resulting object lists were large (4493 objects for brands and 1339 for computer science). In a manual inspection, low-frequency and ambiguous objects were removed from all object lists (e.g., RAID (a hardware concept) and Unity (a game engine) are also regularly used common nouns). The remaining objects were combined into pairs: for each object type (seed Wikipedia list page or seed word), all possible combinations were created. These pairs were then used to find sentences containing both objects. The approaches described above for selecting pairs of compared objects tend to minimize the inclusion of domain-specific data but do not fully solve the problem; we leave extending the dataset with more diverse object pairs, including abstract concepts, to future work. For the sentence mining, we used the publicly available index of dependency-parsed sentences from the Common Crawl corpus, containing over 14 billion English sentences filtered for duplicates. This index was queried for sentences containing both objects of each pair.

For 90% of the pairs, we also added comparative cue words (better, easier, faster, nicer, wiser, cooler, decent, safer, superior, solid, terrific, worse, harder, slower, poorly, uglier, poorer, lousy, nastier, inferior, mediocre) to the query in order to bias the selection towards comparisons while still admitting comparisons that do not contain any of the anticipated cues. This was necessary because random sampling would have yielded only a very tiny fraction of comparisons. Note that even sentences containing a cue word do not necessarily express a comparison between the desired targets (dog vs. cat: "He's the best pet that you can get, better than a dog or cat."). It is thus especially crucial to enable a classifier to learn not to rely only on the presence of cue words (which would be very likely with a random sample containing very few comparisons). For our corpus, we kept pairs with at least 100 retrieved sentences.

From all sentences of those pairs, 2500 per category were randomly sampled as candidates for a crowdsourced annotation that we conducted on figure-eight.com in several small batches. Each sentence was annotated by at least five trusted workers. We ranked annotations by confidence, the Figure Eight internal measure combining annotator trust and voting, and discarded annotations with a confidence below 50%. Of all annotated items, 71% received unanimous votes and for over 85% at least 4 out of 5 workers agreed, rendering the collection procedure, which aimed at ease of annotation, successful.

The final dataset contains 7199 sentences with 271 distinct object pairs. The majority of sentences (over 72%) are non-comparative despite biasing the selection with cue words; in 70% of the comparative sentences, the favored target is named first. You can browse through the data here: https://docs.google.com/spreadsheets/d/1U8i6EU9GUKmHdPnfwXEuBxi0h3aiRCLPRC-3c9ROiOE/edit?usp=sharing

A full description of the dataset is available in the workshop paper at the ACL 2019 conference. Please cite this paper if you use the data: Franzek, Mirco, Alexander Panchenko, and Chris Biemann. "Categorization of Comparative Sentences for Argument Mining." arXiv preprint arXiv:1809.06152 (2018).

@inproceedings{franzek2018categorization,
  title     = {Categorization of Comparative Sentences for Argument Mining},
  author    = {Panchenko, Alexander and Bondarenko and Franzek, Mirco and Hagen, Matthias and Biemann, Chris},
  booktitle = {Proceedings of the 6th Workshop on Argument Mining at ACL'2019},
  year      = {2019},
  address   = {Florence, Italy}
}
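The cue-word-biased retrieval step described above can be sketched in base R as follows; the example sentences, the target pair, and the shortened cue list are illustrative stand-ins rather than the actual query code run against the Common Crawl index.

```r
# Rough sketch of the retrieval step: keep only sentences that mention both
# target objects and, for the cue-biased share of pairs, at least one
# comparative cue word. The example sentences are made up.
sentences <- c("Python is easier to learn than Java for most beginners.",
               "Java and Python are both widely used at our company.",
               "I think Java compiles faster than Python on this project.")

pair <- c("Python", "Java")
cues <- c("better", "easier", "faster", "worse", "harder", "slower",
          "superior", "inferior")

has_pair <- grepl(pair[1], sentences, fixed = TRUE) &
            grepl(pair[2], sentences, fixed = TRUE)
has_cue  <- grepl(paste(cues, collapse = "|"), sentences, ignore.case = TRUE)

sentences[has_pair & has_cue]   # candidates biased towards actual comparisons
```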
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The labels “F” and “T” indicate that the student answered the exercise incorrectly or correctly, respectively.
JAMIO-2017-0039.R2: This data file contains de-identified client_IDs, codes for Omaha System problem concepts, related strength indicators, signs/symptoms, and Knowledge, Behavior, and Status scores, as well as a data dictionary for codes and terms.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
With health care policy directives advancing value-based care, risk assessments and management have permeated health care discourse. The conventional problem-based infrastructure defines what data are employed to build this discourse and how it unfolds. Such a health care model tends to bias data for risk assessment and risk management toward problems and does not capture data about health assets or strengths. The purpose of this article is to explore and illustrate the incorporation of a strengths-based data capture model into risk assessment and management by harnessing data-driven and person-centered health assets using the Omaha System. This strengths-based data capture model encourages and enables use of whole-person data including strengths at the individual level and, in aggregate, at the population level. When aggregated, such data may be used for the development of strengths-based population health metrics that will promote evaluation of data-driven and person-centered care, outcomes, and value.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Abstract: Short-term mining planning typically relies on samples obtained from channels or less-accurate sampling methods. The results may include larger sampling errors than those derived from diamond drill hole core samples. The aim of this paper is to evaluate the impact of the sampling error on grade estimation and propose a method of correcting the imprecision and bias in the soft data. In addition, this paper evaluates the benefits of using soft data in mining planning. These concepts are illustrated via a gold mine case study, where two different data types are presented. The study used Au grades collected via diamond drilling (hard data) and channels (soft data). Four methodologies were considered for estimation of the Au grades of each block to be mined: ordinary kriging with hard and soft data pooled without considering differences in data quality; ordinary kriging with only hard data; standardized ordinary kriging with pooled hard and soft data; and standardized ordinary cokriging. The results show that even biased samples collected using poor sampling protocols improve the estimates more than a limited number of precise and unbiased samples. A well-designed estimation method corrects the biases embedded in the samples, mitigating their propagation to the block model.
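As a rough illustration of the simplest of the four estimation set-ups (ordinary kriging on hard data only), the sketch below uses the R packages gstat and sp with synthetic Au grades; it is not the case-study workflow, and the grid and variogram parameters are assumptions.

```r
# Illustrative ordinary kriging on hard data only, with synthetic Au grades
# standing in for the case-study samples (gstat + sp, not the paper's code).
library(sp)
library(gstat)

set.seed(42)
hard <- data.frame(x  = runif(200, 0, 1000),
                   y  = runif(200, 0, 1000),
                   au = rlnorm(200, meanlog = 0, sdlog = 0.5))  # synthetic grades
coordinates(hard) <- ~ x + y

# Experimental variogram and a fitted spherical model (assumed starting values)
v  <- variogram(au ~ 1, hard)
vm <- fit.variogram(v, vgm(psill = 0.3, model = "Sph", range = 300, nugget = 0.05))

# Prediction grid (block centroids in a real application)
grid <- expand.grid(x = seq(25, 975, by = 50), y = seq(25, 975, by = 50))
coordinates(grid) <- ~ x + y
gridded(grid) <- TRUE

ok <- krige(au ~ 1, hard, grid, model = vm)   # ordinary kriging estimates
summary(ok$var1.pred)
```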
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
An overview of the ASSIST2012 and Eedi datasets, which are based on different intelligent tutoring systems.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
R code for counting occurrences of gene names, symbols or synonyms in abstracts. (HTML)
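A minimal base-R sketch of this kind of counting, using hypothetical gene symbols and abstracts rather than the study data; word boundaries keep a symbol from matching inside a longer symbol.

```r
# Sketch: count the number of abstracts that mention each gene symbol
# (the symbols and abstracts below are placeholders, not the study data).
abstracts <- c("Expression of TP53 and BRCA1 was measured in tumour samples.",
               "We observed no change in BRCA1 levels after treatment.",
               "MYC amplification correlated with poor outcomes.")
genes <- c("TP53", "BRCA1", "MYC", "EGFR")

count_mentions <- function(symbol, texts) {
  # word boundaries so e.g. "MYC" does not match inside "MYCN"
  sum(grepl(paste0("\\b", symbol, "\\b"), texts, perl = TRUE))
}

sapply(genes, count_mentions, texts = abstracts)
#  TP53 BRCA1   MYC  EGFR
#     1     2     1     0
```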