MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
XOR-TyDi QA brings together for the first time information-seeking questions, open-retrieval QA, and multilingual QA to create a multilingual open-retrieval QA dataset that enables cross-lingual answer retrieval. It consists of questions written by information-seeking native speakers in 7 typologically diverse languages and answer annotations that are retrieved from multilingual document collections. There are three sub-tasks: XOR-Retrieve, XOR-EnglishSpan, and XOR-Full.
crystina-z/xor-tydi dataset hosted on Hugging Face and contributed by the HF Datasets community
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
OLS Regression Results—Fraction of mutants losing XOR vs number of XOR-only functional sites.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Relation between the actual probability associated with each type of input pattern in one training set and a typical network’s responses to the patterns.
There are six datasets generated from k-XOR PUFs with k ranging from 2 to 7. The PUFs were implemented on an Artix-7 chip. These datasets can be used for XOR-PUF performance and security evaluation.
Each file has four columns:
1. The current iteration of the corresponding CRP.
2. The CRP counter of the current iteration.
3. The input challenge in hexadecimal (16 hexadecimal digits).
4. The response to the given challenge, repeated 32 times (8 hexadecimal digits).
The files contain 60K CRPs repeated over 32 iterations.
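As an illustration, a minimal parsing sketch for this four-column layout, assuming whitespace-separated columns; the file name "xorpuf_4.txt" is hypothetical and not part of the dataset documentation:
# Minimal sketch: parse one CRP file with the four-column layout described above.
# Assumptions (not from the dataset docs): columns are whitespace-separated and the
# file name "xorpuf_4.txt" is hypothetical.
def parse_crp_file(path):
    crps = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) != 4:
                continue  # skip malformed lines
            iteration, crp_counter = int(parts[0]), int(parts[1])
            challenge = int(parts[2], 16)  # 16 hex digits -> 64-bit challenge
            response = int(parts[3], 16)   # 8 hex digits  -> 1-bit response repeated 32 times
            challenge_bits = [(challenge >> i) & 1 for i in range(63, -1, -1)]
            response_bits = [(response >> i) & 1 for i in range(31, -1, -1)]
            crps.append((iteration, crp_counter, challenge_bits, response_bits))
    return crps

crps = parse_crp_file("xorpuf_4.txt")  # hypothetical file name
print(len(crps), crps[0][2][:8], crps[0][3][:8])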
Please cite and check the following paper for more details: [1] Mursi, Khalid T., Yu Zhuang, Mohammed Saeed Alkatheiri, and Ahmad O. Aseeri. "Extensive Examination of XOR Arbiter PUFs as Security Primitives for Resource-Constrained IoT Devices." In 2019 17th International Conference on Privacy, Security and Trust (PST), pp. 1-9. IEEE, 2019.
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
Dataset Card for "tydi_xor_rc"
Dataset Summary
TyDi QA is a question answering dataset covering 11 typologically diverse languages. XORQA is an extension of the original TyDi QA dataset to also include unanswerable questions, where context documents are only in English but questions are in 7 languages. XOR-AttriQA contains annotated attribution data for a sample of XORQA. This dataset is a combined and simplified version of the Reading Comprehension data from XORQA and… See the full description on the dataset page: https://huggingface.co/datasets/coastalcph/tydi_xor_rc.
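For reference, a minimal loading sketch using the Hugging Face datasets library; the split name "train" is an assumption, so check the dataset page for the actual splits:
# Minimal sketch: load coastalcph/tydi_xor_rc with Hugging Face `datasets`.
# The split name "train" is an assumption; consult the dataset page for available splits.
from datasets import load_dataset

ds = load_dataset("coastalcph/tydi_xor_rc", split="train")
print(ds)     # dataset summary (features, number of rows)
print(ds[0])  # first example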
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The mean R² (with standard deviations) between network responses and actual probabilities for 16 different input patterns in each of the four types of training sets.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Swarm intelligence (SI) algorithms have an excellent ability to search for the optimal solution, and they apply two mechanisms during the search. The first mechanism is exploration, which covers a vast area of the search space; when a promising area is found, the algorithm switches from exploration to exploitation. A good SI algorithm balances the exploration and exploitation mechanisms. In this paper, we propose a modified version of the chimp optimization algorithm (ChOA) to train a feed-forward neural network (FNN). The proposed algorithm is called the modified weighted chimp optimization algorithm (MWChOA). The main drawback of the standard ChOA and the weighted chimp optimization algorithm (WChOA) is that they can become trapped in local optima, because most of the solutions update their positions based on the positions of the four leader solutions in the population. In the proposed algorithm, we reduce the number of leader solutions from four to three, and we found that this reduction enhances the search, increases the exploration phase, and avoids trapping in local optima. We test the proposed algorithm on eleven datasets and compare it against 16 SI algorithms. The results show that the proposed algorithm successfully trains the FNN compared with the other SI algorithms.
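To illustrate the kind of update rule described above, a minimal sketch of a three-leader, ChOA/GWO-style position update; this is not the authors' exact MWChOA formulation, and the coefficient schedule and the equal weighting of the three candidates are assumptions:
# Minimal sketch of a three-leader position update in the style of ChOA/GWO.
# NOT the authors' exact MWChOA formulation; the coefficients (a, r1, r2) and the
# equal averaging of the three candidate positions are illustrative assumptions.
import numpy as np

def update_position(x, leaders, a, rng):
    """Move solution x toward the three leader solutions."""
    candidates = []
    for leader in leaders:                 # exactly three leaders (vs. four in ChOA/WChOA)
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        A = 2 * a * r1 - a                 # exploration/exploitation coefficient
        C = 2 * r2
        d = np.abs(C * leader - x)         # distance to this leader
        candidates.append(leader - A * d)  # candidate position driven by this leader
    return np.mean(candidates, axis=0)     # average of the three candidates

rng = np.random.default_rng(0)
dim = 10                                   # e.g. number of FNN weights being optimized
x = rng.uniform(-1, 1, dim)
leaders = [rng.uniform(-1, 1, dim) for _ in range(3)]
a = 2.0                                    # typically decreases linearly over iterations
print(update_position(x, leaders, a, rng))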
OLS Regression Results—Fraction of mutants losing EQU and XOR vs number of overlapping functional sites.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Thanks for your interest in our work!
To facilitate your assessment and replication, we provide the dataset and source code (Verilog / Python model / MATLAB) of our work (OIPUF) here.
Our latest work (SOI PUF and cSOI PUF), published in IEEE TIFS (2024), is based on OIPUF.
If you have any questions, please feel free to contact us: chongyaoxu@126.com / mklaw@um.edu.mo
The full text on OIPUF can be downloaded from https://ieeexplore.ieee.org/document/10103139
The full text on SOI PUF and cSOI PUF can be downloaded from https://ieeexplore.ieee.org/document/10458688
The source code and FPGA project for SOI PUF and cSOI PUF can be downloaded from https://github.com/yg99992/SOI_PUF.
Matlab code
matlab/Generate_OI_block.m
This is a MATLAB script used to generate the Verilog code of a random OI block.
matlab/OIPUF_64x4_placement.m
This is a MATLAB function used to generate the XDC file that constrains the placement of the (64,4)-OI block.
matlab/OIPUF_64x8_placement.m
This is a MATLAB function used to generate the XDC file that constrains the placement of the (64,8)-OI block.
matlab/OIPUF_placement_example.m
An example script demonstrating the usage of OIPUF_64x4_placement.m and OIPUF_64x8_placement.m.
Python code
python/puf_models.py
The Python models of XOR PUFs and OIPUFs, which can be used to generate CRPs.
for example:
from puf_models import oi_puf
# generate a (64,4)-OIPUF and further use the generated OIPUF to generate 1M CRPs
crps, puf_instance = oi_puf.gen_CRPs_PUF(64, 4, 1_000_000)
python/attack_pypuf.py
A script used to conduct ANN attacks on XOR PUFs and OIPUFs (the 'pypuf' package should be installed correctly).
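For reference, a minimal sketch of an ANN (MLP) attack on a simulated 4-XOR arbiter PUF using pypuf's documented attack API; this is not the repository's attack_pypuf.py, the hyperparameters are illustrative, and a real attack on the FPGA data would load CRPs from the CSV files instead of simulating the PUF:
# Minimal sketch: MLP attack on a simulated 64-stage, 4-XOR arbiter PUF with pypuf.
# NOT the repository's attack_pypuf.py; network size and training settings are
# illustrative values only.
from pypuf.simulation import XORArbiterPUF
from pypuf.io import ChallengeResponseSet
from pypuf.attack import MLPAttack2021
import pypuf.metrics

puf = XORArbiterPUF(n=64, k=4, seed=1)                       # simulated target PUF
crps = ChallengeResponseSet.from_simulation(puf, N=500_000, seed=2)

attack = MLPAttack2021(
    crps, seed=3,
    net=[2 ** 4, 2 ** 5, 2 ** 4],                            # hidden-layer sizes
    epochs=30, lr=0.001, bs=1000, early_stop=0.08,
)
attack.fit()

# Prediction accuracy of the learned model against the target PUF
print(pypuf.metrics.similarity(puf, attack.model, seed=4))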
Verilog code
verilog/OIPUF_64_4/
All the Verilog files of the (64,4)-OIPUF
verilog/OIPUF_64_8/
All the Verilog files of the (64,8)-OIPUF
CRP datasets extracted from FPGA
It consists of 13 CRP files (all CRPs were extracted from the FPGA):
FPGA_CRPs/FPGA3_CHAL_100M.csv
The 100 million 64-bit challenges
FPGA_CRPs/FPGA3_k4_PUF0.csv
The 100 million 1-bit responses extracted from (64,4)-OIPUF0
FPGA_CRPs/FPGA3_k4_PUF1.csv
The 100 million 1-bit responses extracted from (64,4)-OIPUF1
FPGA_CRPs/FPGA3_k4_PUF2.csv
The 100 million 1-bit responses extracted from (64,4)-OIPUF2
FPGA_CRPs/FPGA3_k4_PUF3.csv
The 100 million 1-bit responses extracted from (64,4)-OIPUF3
FPGA_CRPs/FPGA3_k4_PUF4.csv
The 100 million 1-bit responses extracted from (64,4)-OIPUF4
FPGA_CRPs/FPGA3_k4_PUF5.csv
The 100 million 1-bit responses extracted from (64,4)-OIPUF5
FPGA_CRPs/FPGA3_k8_PUF0.csv
The 100 million 1-bit responses extracted from (64,8)-OIPUF0
FPGA_CRPs/FPGA3_k8_PUF1.csv
The 100 million 1-bit responses extracted from (64,8)-OIPUF1
FPGA_CRPs/FPGA3_k8_PUF2.csv
The 100 million 1-bit responses extracted from (64,8)-OIPUF2
FPGA_CRPs/FPGA3_k8_PUF3.csv
The 100 million 1-bit responses extracted from (64,8)-OIPUF3
FPGA_CRPs/FPGA3_k8_PUF4.csv
The 100 million 1-bit responses extracted from (64,8)-OIPUF4
FPGA_CRPs/FPGA3_k8_PUF5.csv
The 100 million 1-bit responses extracted from (64,8)-OIPUF5
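A minimal sketch for pairing the shared challenge file with one of the response files; the one-value-per-row CSV layout is an assumption, so adjust delimiters and any hex/binary decoding to the actual file contents:
# Minimal sketch: pair the shared challenge file with one response file.
# Assumption (not stated above): each CSV row holds one challenge / one response;
# adjust the parsing to match the actual file format.
import itertools

def read_column(path, n_rows=100_000):
    """Read the first n_rows lines of a one-value-per-row CSV as raw strings."""
    with open(path) as f:
        return [line.strip() for line in itertools.islice(f, n_rows)]

challenges = read_column("FPGA_CRPs/FPGA3_CHAL_100M.csv")
responses = read_column("FPGA_CRPs/FPGA3_k4_PUF0.csv")
assert len(challenges) == len(responses)  # row i of each file forms one CRP
print(challenges[0], responses[0])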
Tevatron/xor-tydi-corpus dataset hosted on Hugging Face and contributed by the HF Datasets community
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Simple logic gate circuit with 2 inputs and 7 mixed operators (AND, OR, NOT, NAND, NOR, XOR, XNOR).
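For orientation, a small sketch that tabulates the seven listed operators over two binary inputs; since NOT is unary, applying it to the first input only is an assumption:
# Small sketch: truth table for the seven listed operators over two binary inputs.
# NOT is unary, so it is applied to the first input here (an assumption).
from itertools import product

gates = {
    "AND":  lambda a, b: a & b,
    "OR":   lambda a, b: a | b,
    "NOT":  lambda a, b: 1 - a,
    "NAND": lambda a, b: 1 - (a & b),
    "NOR":  lambda a, b: 1 - (a | b),
    "XOR":  lambda a, b: a ^ b,
    "XNOR": lambda a, b: 1 - (a ^ b),
}

print("a b  " + " ".join(f"{name:4}" for name in gates))
for a, b in product((0, 1), repeat=2):
    print(f"{a} {b}  " + " ".join(f"{fn(a, b):<4}" for fn in gates.values()))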
Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
License information was derived automatically
Included, in both .txt and .csv format, are the data generated from the simulated 16- and 32-stage Arbiter PUFs (normal APUF, feed-forward APUF, and XOR APUF) used to perform the experimental work in Chapter 4 of the PhD thesis 'Leveraging DRAM-based Physically Unclonable Functions for Enhancing Authentication in Resource-Constrained Applications' by Owen Millwood (also in publication [1]). C_Origin provides the initial set of challenges input to the scheme; Output gives the two-bit (for 16-stage) and four-bit (for 32-stage) outputs. This dataset is compatible with the code found using the following DOI: 10.15131/shef.data.27095215.
[1] O. Millwood, M. K. Pehlivanoğlu, A. Mohammadi Pasikhani, J. Miskelly, P. Gope and E. B. Kavun, "A Generic Obfuscation Framework for Preventing ML-Attacks on Strong-PUFs through Exploitation of DRAM-PUFs," 2023 IEEE 8th European Symposium on Security and Privacy (EuroS&P), Delft, Netherlands, 2023, pp. 92-106, doi: 10.1109/EuroSP57164.2023.00015.
The ability of humans to intentionally learn, without feedback, unidimensional stimulus relations in categorization tasks has been empirically established over the past two decades. However, whether observers can learn more complex multidimensional stimulus relations across these unsupervised tasks has not yet been determined. We demonstrate, across an unsupervised concept completion experiment, that the failure to observe multidimensional learning in previous experiments may be attributable to factors such as increased stimulus or task complexity. We posit that concept completion is related to category learning in that it reveals the underlying tendencies that are associated with some categories being easier to learn than others. In our experiments, we found observers readily learned to complete a two-dimensional exclusive-or concept, evidenced by an increase in object selection as the task progressed together with a decrease in choice response times. We also found that ob...
Data from: Constructing concepts without feedback: An empirical investigation of how relational information affects multidimensional concept completion behavior in an unsupervised task
Dataset DOI: 10.5061/dryad.ht76hdrtk
This dataset contains the raw and processed data for the pilot experiment and the main experiment reported in Doan and Vigo (in press, PLOS One), for assessing whether individuals display an increase in multidimensional concept completion behavior across trials of the concept completion task used by the authors. Data cover six repeated-measures conditions of concept completions for the pilot experiment (1D-XOR-C3D, 1D-C3D-XOR, XOR-1D-C3D, XOR-C3D-1D, C3D-1D-XOR, C3D-XOR-1D) and two between-subjects conditions of concept completions for the main experiment (XOR-XOR, C3D-C3D). For each experiment, we include data associated with the series of objects seen by participants (partial 1D, XOR, or ... Participants provided written and verbal informed consent. Each participant was assigned a unique number, which was neither attached to nor stored with their name. There is no existing document that links the number with any participant's name, making it impossible to determine whose data belongs to any participant.
Procedurally Generated Matrices (PGM) data from the paper Measuring Abstract Reasoning in Neural Networks, Barrett, Hill, Santoro et al. 2018. The goal is to infer the correct answer from the context panels based on abstract reasoning.
To use this data set, please download all the *.tar.gz files from the data set page and place them in ~/tensorflow_datasets/abstract_reasoning/.
\(R\) denotes the set of relation types (progression, XOR, OR, AND, consistent union), \(O\) denotes the object types (shape, line), and \(A\) denotes the attribute types (size, colour, position, number). The structure of a matrix, \(S\), is the set of triples \(S={[r, o, a]}\) that determine the challenge posed by a particular matrix.
To use this dataset:
import tensorflow_datasets as tfds
ds = tfds.load('abstract_reasoning', split='train')
for ex in ds.take(4):
  print(ex)
See the guide for more information on tensorflow_datasets.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The detailed amounts of the DNA molecules involved in the XOR logic gate.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Laboratory and echocardiographic data of group 2 patients by XOR activity quartile.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
XOR function.
https://www.bitget.com/pl/price/xor
Tracking the SORA price history allows cryptocurrency traders to easily monitor the performance of their investments. You can conveniently follow not only the opening, high, and closing values for SORA, but also the trading volume. In addition, you can instantly view the daily change as a percentage, making it easy to identify days with significant fluctuations. According to our SORA price history data, its value rose to a record high on 2021-12-20, exceeding $17,573.88 USD. On the other hand, the lowest point in SORA's price trajectory, commonly referred to as the "SORA all-time low", occurred on 2025-04-06. If someone had bought SORA at that time, they would currently be enjoying a significant profit of 10,758%. By design, SORA has no cap on its total supply. The circulating supply of SORA is currently 0 coins. All prices listed on this page come from the reliable source Bitget. It is important to rely on a single source to evaluate your investments, since values may differ between vendors. Our historical SORA price dataset covers intervals of 1 minute, 1 day, 1 week, and 1 month (open/high/low/close/volume). These datasets have undergone rigorous testing to ensure consistency, completeness, and accuracy. They are specifically designed for trading simulation and backtesting, are readily available for free download, and are updated in real time.
A hierarchically ordered distribution of 3D points was created with MATLAB. It contains 120,000 data points in five hierarchical levels with one to four child nodes per parent. Data values for the three axes range between 0 and 1. The structure can be seen in the attached figure. In each hierarchical level, different distributions of data points are implemented, which allows classifiers to be tested under various conditions. The most common distribution in the dataset is a simple Gaussian-distributed point cloud. Other sampled distributions are a spherical distribution (a sphere in 3D) and a circular (donut) distribution along different axes. XOR distributions are implemented in different patterns, e.g. four batches with crossed classes or eight batches with two or four classes. The most complex data distribution is the spring roll, where the data points are intertwined with one another. To create indistinguishable cases, where a classifier's predictions are expected to perform poorly, some data points are simply randomly intermixed with another class.
The .csv file contains four columns: label | x-coordinate | y-coordinate | z-coordinate. The label for each sample provides all the hierarchical information needed. Each label is composed of five digits, one for each hierarchical level. As an example, sample '11421' decodes as: hierarchical level 1: class 1; hierarchical level 2: class 1; hierarchical level 3: class 4; hierarchical level 4: class 2; hierarchical level 5: class 1.
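A minimal sketch for decoding these five-digit labels from the .csv file; the file name "hierarchical_points.csv" and the absence of a header row are assumptions:
# Minimal sketch: decode the five-digit hierarchical labels from the .csv file.
# Assumptions (not stated above): the file name "hierarchical_points.csv" is hypothetical
# and the file has no header row; columns are label, x, y, z.
import csv

def decode_label(label):
    """Return the class at each of the five hierarchical levels, e.g. '11421' -> [1, 1, 4, 2, 1]."""
    return [int(digit) for digit in str(label)]

with open("hierarchical_points.csv", newline="") as f:
    for row in csv.reader(f):
        label, x, y, z = row[0], float(row[1]), float(row[2]), float(row[3])
        print(decode_label(label), (x, y, z))
        break  # only show the first sample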