The "SPD LTDS Appendix 2 Transformer Data (Table 2)" dataset provides the equipment parameter data for each group of transformers on the SP Distribution (SPD) system. The insertion of N/K in the table indicates where a parameter has not been verified.The column titled "Reverse Rating" shows the reverse power flow capability of the tap changer.Click here to access our full Long Term Development Statements for both SP Distribution (SPD) & SP Manweb (SPM). For additional information on column definitions, please click on the Dataset schema link below.Note: A fully formatted copy of this appendix can be downloaded from the Export tab under Alternative exports.Download dataset metadata (JSON)If you wish to provide feedback at a dataset or row level, please click on the “Feedback” tab above. Data Triage:As part of our commitment to enhancing the transparency, and accessibility of the data we share, we publish the results of our Data Triage process.Our Data Triage documentation includes our Risk Assessments; detailing any controls we have implemented to prevent exposure of sensitive information. Click here to access the Data Triage documentation for the Long Term Development Statement dataset.To access our full suite of Data Triage documentation, visit the SP Energy Networks Data & Information page.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The dataset for "Towards Signed Distance Function based Metamaterial Design: Neural Operator Transformer for Forward Prediction and Diffusion Model for Inverse Design" includes geometries, corresponding stress-strain curves, von Mises stress fields, displacement fields, signed distance functions, boundary point clouds, mesh nodes, and mesh cell connectivity.
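The entry above lists several array-valued fields per sample. A minimal sketch of how such fields might be inspected with NumPy; the archive name and key names below are hypothetical assumptions, not documented by the dataset:

import numpy as np

# Hypothetical archive layout; take the actual file names and keys
# from the dataset's own documentation.
sample = np.load("metamaterial_sample.npz")
sdf = sample["sdf"]                   # signed distance function on a grid
boundary = sample["boundary_points"]  # boundary point cloud
nodes = sample["mesh_nodes"]          # mesh node coordinates
cells = sample["mesh_cells"]          # mesh cell connectivity (node indices)
curve = sample["stress_strain"]       # sampled stress-strain curve
print(sdf.shape, boundary.shape, nodes.shape, cells.shape, curve.shape)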
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Recently, a considerable number of studies in computer vision involve deep neural architectures called vision transformers. Visual processing in these models incorporates computational components that are claimed to implement attention mechanisms. Despite an increasing body of work that attempts to understand the role of attention mechanisms in vision transformers, their effect is largely unknown. Here, we asked whether the attention mechanisms in vision transformers exhibit effects similar to those known in human visual attention. To answer this question, we revisited the attention formulation in these models and found that, despite the name, these models computationally perform a special class of relaxation labeling with similarity grouping effects. Additionally, whereas modern experimental findings reveal that human visual attention involves both feed-forward and feedback mechanisms, the purely feed-forward architecture of vision transformers suggests that attention in these models cannot have the same effects as in humans. To quantify these observations, we evaluated grouping performance in a family of vision transformers. Our results suggest that self-attention modules group figures in the stimuli based on the similarity of visual features such as color. Also, in a singleton detection experiment, an instance of salient object detection, we studied whether these models exhibit effects similar to those of the feed-forward visual salience mechanisms thought to be utilized in human visual attention. We found that, in general, the transformer-based attention modules assign more salience either to distractors or to the ground, the opposite of both human and computational salience. Together, our study suggests that the mechanisms in vision transformers perform perceptual organization based on feature similarity, not attention.
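The core observation, that self-attention groups tokens by feature similarity, is visible directly in the computation: attention weights are a softmax over dot-product similarities between token features, so similar tokens pool with one another. A minimal sketch of generic scaled dot-product self-attention (not any specific model from the study; identity projections are assumed for clarity):

import numpy as np

def self_attention(x):
    # x: (tokens, dim). With identity Q/K/V projections, the attention
    # score is just the scaled dot-product similarity between features.
    scores = x @ x.T / np.sqrt(x.shape[1])
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)  # row-wise softmax
    return w @ x  # each token becomes a similarity-weighted average

# Two similar "red" tokens and one "blue" token: the red tokens attend
# mostly to each other, i.e., grouping by feature similarity.
x = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
print(self_attention(x))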
This dataset contains the Burgers' equation and Darcy flow benchmarks featured in several operator learning tasks involving parametric partial differential equations: for \(a\in \mathcal{A}\)
\[L_a (u) = f \quad \text{ in } \Omega \subset\mathbb{R}^m,\]
with appropriate boundary or initial conditions. The tasks are forward and inverse problems, as described below.
The datasets are given in the form of MATLAB files. They can be loaded into torch.Tensor format using the following snippets. The first index enumerates the samples; the remaining dimensions contain the spatial discretizations of the data and the targets. data_path is specified by the user.
PDE for \(u \in H^1((0,2\pi)) \cap C^0(\mathbb{S}^1)\): \[ \frac{\partial u}{\partial t} + u \frac{\partial u}{\partial x} = \nu \frac{\partial^2 u}{\partial x^2} \quad \text{for } (x,t)\in (0,2\pi) \times (0,1],\] with initial condition \(u(x, 0) = u_0(x)\) for \(x\in (0,2\pi)\).
from scipy.io import loadmat

data = loadmat(data_path)  # data_path points to the Burgers .mat file
x_data = data['a']  # input: initial condition u_0 on the spatial grid
y_data = data['u']  # target: the solution u
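loadmat returns NumPy arrays, so reaching the torch.Tensor format mentioned above takes one more step (a minimal sketch, assuming PyTorch is installed):

import torch

x = torch.from_numpy(x_data).float()  # (samples, spatial points)
y = torch.from_numpy(y_data).float()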
In a forward problem, a is the input and u is the solution (target). In an inverse problem, u is the input and a is the target.
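To make the Burgers' dynamics above concrete, here is a minimal explicit finite-difference sketch with periodic boundary conditions. This is an illustration only, not how the benchmark data were generated; the grid size, viscosity, and time step are arbitrary assumptions:

import numpy as np

n, nu, dt = 256, 0.1, 1e-4
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
dx = x[1] - x[0]
u = np.sin(x)  # an arbitrary initial condition u_0

for _ in range(int(1 / dt)):  # integrate from t = 0 to t = 1
    u_x = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)        # central difference
    u_xx = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2  # second difference
    u = u + dt * (-u * u_x + nu * u_xx)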
PDE: for the coefficient \(a\in L^{\infty}(\Omega)\) and \(u \in H^1_0(\Omega)\), \[-\nabla \cdot( a(x) \nabla u ) = 1\quad \text{in }\; (0,1)^2,\] with a zero Dirichlet boundary condition.
from scipy.io import loadmat

data = loadmat(data_path)  # data_path points to the Darcy .mat file
a = data['coeff']  # input: diffusion coefficient a(x) on the grid
u = data['sol']    # target: the corresponding solution u
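Once converted to tensors, either benchmark can be wrapped for mini-batch training in the usual way (a minimal sketch using standard PyTorch utilities; the batch size is an arbitrary choice):

import torch
from torch.utils.data import TensorDataset, DataLoader

a_t = torch.from_numpy(a).float()
u_t = torch.from_numpy(u).float()
loader = DataLoader(TensorDataset(a_t, u_t), batch_size=16, shuffle=True)
for a_batch, u_batch in loader:
    pass  # train an operator-learning model on (a_batch, u_batch)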
The dataset is owned by Zongyi Li and released under the MIT license: https://github.com/zongyi-li/fourier_neural_operator
@misc{li2020fourier,
title={Fourier Neural Operator for Parametric Partial Differential Equations},
author={Zongyi Li and Nikola Kovachki and Kamyar Azizzadenesheli and Burigede Liu and Kaushik Bhattacharya and Andrew Stuart and Anima Anandkumar},
year={2020},
eprint={2010.08895},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
@misc{li2020neural,
title={Neural Operator: Graph Kernel Network for Partial Differential Equations},
author={Zongyi Li and Nikola Kovachki and Kamyar Azizzadenesheli and Burigede Liu and Kaushik Bhattacharya and Andrew Stuart and Anima Anandkumar},
year={2020},
eprint={2003.03485},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
@misc{cao2021,
title={Choose a Transformer: Fourier or Galerkin},
author={Shuhao Cao},
year={2021},
eprint={2105.14995},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
The "SPD LTDS Appendix 2 Transformer Data (Table 2)" dataset provides the equipment parameter data for each group of transformers on the SP Distribution (SPD) system. The insertion of N/K in the table indicates where a parameter has not been verified.The column titled "Reverse Rating" shows the reverse power flow capability of the tap changer.Click here to access our full Long Term Development Statements for both SP Distribution (SPD) & SP Manweb (SPM). For additional information on column definitions, please click on the Dataset schema link below.Note: A fully formatted copy of this appendix can be downloaded from the Export tab under Alternative exports.Download dataset metadata (JSON)If you wish to provide feedback at a dataset or row level, please click on the “Feedback” tab above. Data Triage:As part of our commitment to enhancing the transparency, and accessibility of the data we share, we publish the results of our Data Triage process.Our Data Triage documentation includes our Risk Assessments; detailing any controls we have implemented to prevent exposure of sensitive information. Click here to access the Data Triage documentation for the Long Term Development Statement dataset.To access our full suite of Data Triage documentation, visit the SP Energy Networks Data & Information page.